Lengrand & Miquel (2008). Classical Fω, orthogonality and symmetric candidates. Annals of Pure and Applied Logic 153:3-20.
We present a version of system Fω, called Fω^C, in which the layer of type
constructors is essentially the traditional one of Fω, whereas provability
of types is classical. The proof-term calculus accounting for the classical
reasoning is a variant of Barbanera and Berardi’s symmetric λ-calculus.
We prove that the whole calculus is strongly normalising. For the
layer of type constructors, we use Tait and Girard’s reducibility method
combined with orthogonality techniques. For the (classical) layer of terms,
we use Barbanera and Berardi’s method based on a symmetric notion of
reducibility candidate. We prove that orthogonality does not capture the
fixpoint construction of symmetric candidates.
We establish the consistency of Fω^C, and relate the calculus to the
traditional system Fω, also when the latter is extended with axioms for
classical logic.
A rather amusing thread on the plt-scheme mailing list reminded me of Wadler's paper Why Calculating is Better than Scheming from 1987, which wasn't discussed here as far as I recall.
It's a bit of a blast from the past, but still worth reading.
Peteris Krumins has been posting his notes on MIT's Introduction to Algorithms. The notes are valuable for anyone interested in working their way through the CLRS text and the MIT OpenCourseWare videos.
I just finished watching the last lecture of MIT’s "Introduction to Algorithms" course. Having a great passion for all aspects of computing, I decided to share everything I learned...
Although not directly tied to programming languages, every PL eventually has to be able to express algorithms. Aside from Knuth, CLRS is probably the closest approximation to a comprehensive treatment of algorithms. The text itself is language agnostic: the authors use their own brand of pseudo-code to describe the algorithms. This has the advantage of letting the reader focus on the algorithms at a higher level, rather than get bogged down in the specifics of any PL. The downside, at least in my estimation, is that the authors don't make it particularly easy to implement the algorithms in any specific PL. The pseudo-code conflates common data structures (such as arrays) with properties/attributes that can be tagged onto those structures, and some of the algorithms refer to variables outside the scope of the function. Also, like Knuth's, most of the algorithms are steeped in state, making them hard to implement with functional programming approaches.
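To make that last point concrete, here is a small, hedged sketch contrasting the two styles on insertion sort (OCaml is used only for illustration; CLRS itself is language agnostic):

    (* Imperative insertion sort, close to the CLRS pseudo-code:
       mutates the array in place, driven by indices and state. *)
    let insertion_sort_inplace (a : int array) : unit =
      for j = 1 to Array.length a - 1 do
        let key = a.(j) in
        let i = ref (j - 1) in
        while !i >= 0 && a.(!i) > key do
          a.(!i + 1) <- a.(!i);
          decr i
        done;
        a.(!i + 1) <- key
      done

    (* Functional insertion sort: no mutation; each element is
       inserted into an already-sorted list. *)
    let rec insert x = function
      | [] -> [x]
      | y :: ys -> if x <= y then x :: y :: ys else y :: insert x ys

    let insertion_sort_pure (l : int list) : int list =
      List.fold_left (fun acc x -> insert x acc) [] l

The functional version is arguably clearer, but it is a different algorithm in spirit: translating the in-place, index-based pseudo-code directly is what CLRS readers working in a functional language run into.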
That said, the video lectures and the accompanying notes above are good resources for anyone who wants to self-study CLRS. Here are the notes thus far:
Via Patrick (who was once a LtU contributor), two interesting blog posts:
- A discussion of the uses of Lua in Lightroom, and of the use of dynamic languages to develop desktop software in general (blog post by a LtU member). Apparently, some 63% of the code written by the Adobe Lightroom team is in Lua.
- A brief history of IBM's VisualAge, originally written in Smalltalk (the product, not the blog post), and Digitalk (remember Digitalk?!).
Story on Nabble here. Release note highlights:
Objective Caml 3.11.0:
----------------------
(Changes that can break existing programs are marked with a "*" )
Language features:
- Addition of lazy patterns: "lazy <pat>" matches suspensions whose values,
after forcing, match the pattern <pat>.
- Introduction of private abbreviation types "type t = private <ty>",
for abstracting the actual manifest type in type abbreviations.
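(Aside: a quick, hedged sketch of these two features; the module and value names below are invented for illustration.)

    (* Lazy patterns: "lazy p" forces a suspension and matches its
       value against p. *)
    let head_of_lazy_list (l : int list Lazy.t) =
      match l with
      | lazy (x :: _) -> Some x
      | lazy [] -> None

    (* Private abbreviation types: outside the module, Nat.t is known
       to be an int (so it can be coerced with (n :> int) for reading),
       but new values can only be made via the module's own functions. *)
    module Nat : sig
      type t = private int
      val of_int : int -> t option
      val to_int : t -> int
    end = struct
      type t = int
      let of_int n = if n >= 0 then Some n else None
      let to_int n = n
    end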
Compilers:
* The file name for a compilation unit must correspond to a valid identifier
(no more "test-me.ml" or "my file.ml".)
* Revised -output-obj: the output name must now be provided; its
extension must be one of .o/.obj, .so/.dll, or .c for the
bytecode compiler. The compilers can now produce a shared library
(with all the needed -ccopts/-ccobjs options) directly.
- With -dtypes, record (in .annot files) which function calls
are tail calls.
- All compiler error messages now include a file name and location.
- Optimized compilation of "lazy e" when the argument "e" is
already evaluated.
- Optimized compilation of equality tests with a variant constant constructor.
- The -dllib options recorded in libraries are no longer ignored when
-use_runtime or -use_prims is used (unless -no_auto_link is
explicitly used).
- Check that at most one of -pack, -a, -shared, -c, -output-obj is
given on the command line.
- Optimized compilation of private types as regular manifest types
(e.g. abbreviation to float, float array or record types with only
float fields).
Native-code compiler:
- A new option "-shared" to produce a plugin that can be dynamically
loaded with the native version of Dynlink.
- A new option "-nodynlink" to enable optimizations valid only for code
that is never dynlinked (no-op except for AMD64).
- More aggressive unboxing of floats and boxed integers.
- Can select which assembler and asm options to use at configuration time.
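(Aside: a rough picture of the plugin workflow that "-shared" enables; the file names here are hypothetical.)

    (* Assumed layout: a plugin compiled with the new flag:
         ocamlopt -shared -o plugin.cmxs plugin.ml
       and a host program linked against dynlink.cmxa:
         ocamlopt -o main dynlink.cmxa main.ml          *)
    let () =
      try Dynlink.loadfile "plugin.cmxs"
      with Dynlink.Error e -> prerr_endline (Dynlink.error_message e)
    (* Dynlink.is_native (see "Other libraries" below) tells the host
       whether it is running as bytecode or native code. *)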
Run-time system:
- Changes in freelist management to reduce fragmentation.
- New implementation of the page table describing the heap (a sparse
hashtable replaces a dense bitvector), fixes issues with address
space randomization on 64-bit OS (PR#4448).
- New "generational" API for registering global memory roots with the GC,
enables faster scanning of global roots.
(The functions are caml_*_generational_global_root in <caml/memory.h>.)
- New function "caml_raise_with_args" to raise an exception with several
arguments from C.
- Changes in implementation of dynamic linking of C code:
under Win32, use Alain Frisch's flexdll implementation of the dlopen
API; under MacOSX, use dlopen API instead of MacOSX bundle API.
Standard library:
- Parsing library: new function "set_trace" to programmatically turn
on or off the printing of a trace during parsing.
- Printexc library: new functions "print_backtrace" and "get_backtrace"
to obtain a stack backtrace of the most recently raised exception.
New function "record_backtrace" to turn the exception backtrace mechanism
on or off from within a program.
- Scanf library: debunking of meta format implementation;
fscanf behaviour revisited: only one input buffer is allocated for any
given input channel;
the %n conversion does not count a lookahead character as read.
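(Aside: a minimal sketch of the new Printexc entry points.)

    let () =
      (* Turn on backtrace recording from within the program
         (the same effect as OCAMLRUNPARAM=b). *)
      Printexc.record_backtrace true;
      try failwith "boom"
      with e ->
        prerr_endline (Printexc.to_string e);
        (* Print the backtrace of the most recently raised exception. *)
        Printexc.print_backtrace stderr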
Other libraries:
- Dynlink: on some platforms, the Dynlink library is now available in
native code. The boolean Dynlink.is_native allows the program to
know whether it has been compiled in bytecode or in native code.
- Bigarrays: added "unsafe_get" and "unsafe_set"
(non-bound-checking versions of "get" and "set").
- Bigarrays: removed limitation "array dimension
- Labltk: added support for TK 8.5.
- Num: added conversions between big_int and int32, nativeint, int64.
More efficient implementation of Num.quo_num and Num.mod_num.
- Threads: improved efficiency of mutex and condition variable operations;
improved interaction with Unix.fork (PR#4577).
- Unix: added getsockopt_error returning type Unix.error.
Added support for TCP_NODELAY and IPV6_ONLY socket options.
- Win32 Unix: "select" now supports all kinds of file descriptors.
Tools:
- ocamldebug now supported under Windows (MSVC and Mingw ports),
but without the replay feature. (Contributed by Sylvain Le Gall
at OCamlCore with support from Lexifi.)
- ocamldoc: new option -no-module-constraint-filter to include functions
hidden by signature constraint in documentation.
- ocamlmklib and ocamldep.opt now available under Windows ports.
- ocamlmklib no longer supports the -implib option.
- ocamlnat: an experimental native toplevel (not built by default).
Bug fixes:
- Major GC and heap compaction: fixed bug involving lazy values and
out-of-heap pointers.
- PR#3915: updated some man pages.
- PR#4261: type-checking of recursive modules
- PR#4308: better stack backtraces for "spontaneous" exceptions such as
Stack_overflow, Out_of_memory, etc.
- PR#4338: Str.global_substitute, Str.global_replace and the Str.*split*
functions are now tail-recursive.
- PR#4503: fixed bug in classify_float on ARM.
- PR#4512: type-checking of recursive modules
- PR#4517: crash in ocamllex-generated lexers.
- PR#4542: problem with return value of Unix.nice.
- PR#4557: type-checking of recursive modules.
- PR#4562: strange %n semantics in scanf.
- PR#4564: add note "stack is not executable" to object files generated by
ocamlopt (Linux/x86, Linux/AMD64).
- PR#4566: bug in Ratio.approx_ratio_fix and Num.approx_num_fix.
- PR#4582: weird behaviour of String.index_from and String.rindex_from.
- PR#4583: stack overflow in "ocamlopt -g" during closure conversion pass.
- PR#4585: ocamldoc and "val virtual" declarations.
- PR#4587: ocamldoc and escaped @ characters.
- PR#4605: Buffer.add_substitute was sometimes wrong when the target string had backslashes.
- PR#4614: Inconsistent declaration of CamlCBCmd in LabelTk library.
Now for some unfortunate news:
Native dynlink used to work on Mac OS X
The clean solution to make natdynlink work on recent Mac OS X systems (besides convincing Apple to support the old behavior of their linker in their new implementation) is to change OCaml's x86 backend so that it produces only PIC code (this has been done for the AMD64 port). I don't think there is currently any plan to work on that.
...
Ouch, this makes it almost a dead end for us. I can offer some time to help in this effort, working on the port or providing feedback. The native dynlink and toplevel are, at least to me, the killer features in 3.11, but adding another hole for Mac OS X Intel (in addition to not supporting x86_64) does not seem like the best choice for an increasingly popular architecture.
...
Well, we'd very much like to support native dynlink on OS X 10.5, but Apple is not helping in the least by crippling their linker compared with the one in 10.4. If anyone from Apple is on this list, feel free to contact us at caml-devel@... for more information on this regression.
So it seems like OCaml needs help along two dimensions: support for x86_64 in general, and support for position-independent code (PIC) in particular. At least on the PIC side, we can steal from the AMD64 port.
In very disappointing news, proper tail calls are out of ES4. It seems that a justification for tail calls could not be found. For example, here is Adobe's position on tail calls:
A more serious point is that we can't avoid adding tail calls at some point if we cater to the functional
programming crowd. However, if we really intend to cater to them then we need to provide data structures that
are functional too, like lists and operations on them—unlike ES arrays, which are entirely imperative. Of course
users can implement those data structures themselves, if they have tail calls, but right now just adding tail calls
“for functional programming” seems like a half solution.
Finally, tail calls preclude the use of straightforward implementation techniques for procedure calls. To be sure
they are less limiting than generators, as one-shot continuations or longjumps are sufficient to handle tail calls in
a non-tail-calling implementation language, but implementations that want good-performance tail calls must
necessarily switch to a code generation technique.
This seems misguided. The user can implement functional data structures but not tail calls (without a whole-program transformation), so the latter are much more valuable than the former. Furthermore, as a functional programmer I'm quite happy to use mutable data structures, but I would certainly miss tail calls. Finally, every JS implementation is already shifting to code generation because straightforward implementation techniques are too slow for the existing idioms used in JS code.
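To see why tail calls are the harder half to recover by hand, consider this hedged sketch (in OCaml for brevity; the point carries over to any language): the tail-recursive loop runs in constant stack, while the non-tail version overflows for large inputs, and no local rewrite fixes it in a language without the guarantee.

    (* Non-tail-recursive: each call pushes a stack frame, so a
       large n overflows the stack. *)
    let rec sum_to n = if n = 0 then 0 else n + sum_to (n - 1)

    (* Tail-recursive: the recursive call is the last thing the
       function does, so a tail-calling implementation reuses the
       frame and runs in constant stack space. *)
    let sum_to' n =
      let rec loop acc n = if n = 0 then acc else loop (acc + n) (n - 1) in
      loop 0 n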
The ES4 wiki still indicates that tail calls are in, so perhaps they'll yet make it. For laughs you might want to look at the ECMAScript progress spreadsheet. Apple sure doesn't like change.
A nice video interview with Anders Hejlsberg and Guy Steele on Concurrency and Language Design at the JAOO conference. Nothing too technically deep, but the interview does manage to crystallize some of the high-level issues that face language designers.
Speaking of interviews, the Lisp50 Conference at OOPSLA 2008 will have what should prove interesting: Alan Kay interviewing John McCarthy.
Worlds: Controlling the Scope of Side Effects by Alessandro Warth and Alan Kay, 2008.
The state of an imperative program—e.g., the values stored in global and local variables, objects’ instance variables, and arrays—changes as its statements are executed. These changes, or side effects, are visible globally: when one part of the program modifies an object, every other part that holds a reference to the same object (either directly or indirectly) is also affected. This paper introduces worlds, a language construct that reifies the notion of program state, and enables programmers to control the scope of side effects. We investigate this idea as an extension of JavaScript, and provide examples that illustrate some of the interesting idioms that it makes possible.
This introduces a new programming construct of just the kind I love: it stimulates the imagination and provides simple, strong dynamic invariants that make programs easy to reason about.
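The paper's construct is a JavaScript extension, but its flavor can be approximated with a toy model: a hedged OCaml sketch in which a "world" is a persistent snapshot of a store, children sprout from a parent, and only an explicit commit propagates changes. The names and API here are invented; this is not the paper's implementation.

    module Store = Map.Make (String)

    (* A toy "world": an immutable snapshot of a string -> int store,
       plus a link to the world it sprouted from. *)
    type world = { store : int Store.t; parent : world option }

    let top = { store = Store.empty; parent = None }
    let sprout w = { store = w.store; parent = Some w }
    let set k v w = { w with store = Store.add k v w.store }
    let get k w = Store.find k w.store

    (* Committing a child world propagates its state to the parent;
       simply dropping the child discards its side effects. *)
    let commit child =
      match child.parent with
      | Some p -> { p with store = child.store }
      | None -> child

A child sees its parent's bindings at sprout time, may shadow them freely, and nothing escapes unless commit is called; that is the "scope of side effects" intuition, minus the paper's treatment of object identity and genuine mutation.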
Parsing Expression Grammars: A Recognition-Based Syntactic Foundation by Bryan Ford, MIT, 2004.
For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS/TDPL and gTS/GTDPL, which are here proven equivalent in effective recognition power.
An excellent paper! I read it for the first time today and was surprised not to find it in the LtU archive.
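A minimal sketch of the core idea, prioritized choice, as toy OCaml recognizer combinators (my own illustration, not Ford's formalism): where a CFG's alternation is nondeterministic, PEG's "/" tries alternatives in order and commits to the first success.

    (* A toy PEG-style recognizer: given the input and a start
       position, either fail or return the position after the match. *)
    type recognizer = string -> int -> int option

    let lit c : recognizer = fun s i ->
      if i < String.length s && s.[i] = c then Some (i + 1) else None

    let seq p q : recognizer = fun s i ->
      match p s i with Some j -> q s j | None -> None

    (* Prioritized choice: try p first and commit to its success;
       q is consulted only if p fails. This is PEG's "/", which
       removes ambiguity by construction. *)
    let alt p q : recognizer = fun s i ->
      match p s i with Some _ as r -> r | None -> q s i

    (* "ab" / "a": ab_or_a "ab" 0 = Some 2 because the first
       alternative is tried first; no separate disambiguation
       rule is needed. *)
    let ab_or_a = alt (seq (lit 'a') (lit 'b')) (lit 'a')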